Real-Time Online Re-Planning for Grasping Under Clutter and Uncertainty
We consider the problem of grasping in clutter. While motion planners have been developed to address this problem in recent years, they are mostly tailored for open-loop execution. Open-loop execution in this domain, however, is likely to fail: it is not possible to model the dynamics of the multi-body, multi-contact physical system with enough accuracy, nor is it reasonable to expect robots to know the exact physical properties of objects, such as their frictional, inertial, and geometric parameters. Therefore, we propose an online re-planning approach for grasping through clutter. The main challenge is the long planning time this domain requires, which makes fast re-planning and fluent execution difficult to realize. To address this, we propose an easily parallelizable algorithm based on stochastic trajectory optimization that generates a sequence of optimal controls. We show that by running this optimizer for only a small number of iterations, it is possible to perform real-time re-planning cycles and achieve reactive manipulation under clutter and uncertainty.
Comment: Published as a conference paper in IEEE Humanoids 201
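The abstract does not spell out the optimizer; below is a minimal sketch of the family it names, an easily parallelizable stochastic trajectory optimizer (here an MPPI-style weighted-perturbation update). All names and parameters (`cost_fn`, `n_samples`, `temperature`, the toy usage) are illustrative assumptions, not the paper's actual algorithm or API.

```python
import numpy as np

def stochastic_traj_opt(cost_fn, u_init, n_samples=64, n_iters=5,
                        noise_std=0.1, temperature=1.0):
    """Sketch of a stochastic trajectory optimizer: perturb a nominal
    control sequence, score the rollouts (each call to cost_fn is
    independent, hence easily parallelizable), and re-weight the
    perturbations by exponentiated cost. All parameters are assumptions."""
    u = u_init.copy()                # nominal controls, shape (horizon, dim)
    for _ in range(n_iters):         # few iterations -> real-time cycles
        noise = noise_std * np.random.randn(n_samples, *u.shape)
        costs = np.array([cost_fn(u + eps) for eps in noise])
        weights = np.exp(-(costs - costs.min()) / temperature)
        weights /= weights.sum()
        u = u + np.einsum('s,s...->...', weights, noise)  # weighted update
    return u

# toy usage: drive a 1-D control sequence toward zero cost
u_star = stochastic_traj_opt(lambda u: float((u ** 2).sum()), np.ones((10, 1)))
```

Running the loop for only a handful of iterations, as the abstract suggests, trades optimality for the low latency a re-planning cycle needs.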
One-Shot Observation Learning
Observation learning is the process of learning a task by observing an expert demonstrator. We present a robust observation learning method for robotic systems. Our principal contributions are a one-shot learning method, in which only a single demonstration is needed for learning, and a novel feature extraction method for extracting unique activity features from that demonstration. Reward values are then generated from the demonstration, and we use a learning algorithm with these rewards to learn the controls for a robotic manipulator to perform the demonstrated task. With simulation and real-robot experiments, we show that the proposed method can be used to learn tasks from a single demonstration under varying viewpoints, object properties, manipulator morphologies, and scene backgrounds.
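As a toy illustration of the reward-generation step, assuming the activity features are fixed-length vectors (the paper's actual feature extractor and distance measure are not given here), a rollout could be scored by its feature-space distance to the single demonstration:

```python
import numpy as np

def reward_from_demo(demo_features, rollout_features):
    """Hypothetical reward: the closer the robot rollout's activity
    features are to those of the single demonstration, the higher the
    reward. Feature shape and metric are assumptions, not the paper's."""
    return -float(np.linalg.norm(demo_features - rollout_features))

# toy usage with 8-dimensional feature vectors
demo = np.random.rand(8)
print(reward_from_demo(demo, demo + 0.05))  # near-zero (high) reward
```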
Learning to Efficiently Plan Robust Frictional Multi-Object Grasps
We consider a decluttering problem where multiple rigid convex polygonal
objects rest in randomly placed positions and orientations on a planar surface
and must be efficiently transported to a packing box using both single and
multi-object grasps. Prior work considered frictionless multi-object grasping.
In this paper, we introduce friction to increase picks per hour. We train a
neural network using real examples to plan robust multi-object grasps. In
physical experiments, we find a 13.7% increase in success rate, a 1.6x increase
in picks per hour, and a 6.3x decrease in grasp planning time compared to prior
work on multi-object grasping. Compared to single-object grasping, we find a 3.1x increase in picks per hour.
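The network itself is not described in the abstract; a minimal sketch of the general pattern, a learned scorer ranking encoded multi-object grasp candidates, might look as follows (input size, architecture, and names are assumptions, not the paper's design):

```python
import torch
import torch.nn as nn

class GraspScorer(nn.Module):
    """Illustrative stand-in for a grasp-robustness network: maps a
    fixed-size encoding of a candidate multi-object grasp to a predicted
    success probability. Architecture is an assumption."""
    def __init__(self, in_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, grasp_encoding):
        return self.net(grasp_encoding)

def best_grasp(scorer, candidates):
    """Pick the candidate with the highest predicted success probability."""
    with torch.no_grad():
        scores = scorer(candidates).squeeze(-1)
    return candidates[scores.argmax()], scores.max().item()

scorer = GraspScorer()
grasp, p = best_grasp(scorer, torch.randn(100, 32))  # 100 encoded candidates
```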
The Teenager's Problem: Efficient Garment Decluttering With Grasp Optimization
This paper addresses the "Teenager's Problem": efficiently removing scattered garments from a planar surface. As grasping and transporting individual garments is highly inefficient, we propose analytical policies that select grasp locations for multiple garments using an overhead camera. Two classes of methods are considered: depth-based methods, which use overhead depth data to find efficient grasps, and segment-based methods, which use segmentation on the overhead RGB image (without requiring any depth data). Grasp efficiency is measured by Objects per Transport (OpT), which denotes the average number of objects removed per trip to the laundry basket. Experiments suggest that both depth- and segment-based methods readily increase OpT; furthermore, these approaches complement each other, with combined hybrid methods yielding further improvements. Finally, a method employing consolidation (with segmentation) is considered, which manipulates the garments on the work surface to increase OpT; this yields a further improvement over the baseline, though at the cost of additional physical actions.
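The OpT metric is simple enough to state in code; the helper below just encodes the definition given in the abstract:

```python
def objects_per_transport(objects_removed: int, transports: int) -> float:
    """Objects per Transport (OpT): average number of objects removed per
    trip to the laundry basket, as defined in the abstract."""
    return objects_removed / transports

# e.g. clearing 12 garments in 4 trips to the basket gives an OpT of 3.0
assert objects_per_transport(12, 4) == 3.0
```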
Learning manipulation planning from VR human demonstrations
The objective of this project is to learn high-level manipulation planning skills from humans and to transfer these skills to robot planners. We used virtual reality to generate data from human participants while they reached for objects on a cluttered tabletop. From this, we devised a qualitative representation of the task space that abstracts human decisions, irrespective of the number of objects in the way. Based on this representation, human demonstrations were segmented and used to train decision classifiers. Using these classifiers, our planner produces a list of waypoints in the task space. These waypoints provide a high-level plan that can be transferred to an arbitrary robot model. The VR dataset is publicly released.
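As a rough sketch of how such decision classifiers could yield a waypoint list, assuming a fixed-length qualitative encoding of the scene (the encoding, labels, and classifier choice here are illustrative, not the project's actual design):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Tiny stand-in training set: qualitative scene states -> next-region labels,
# as would be obtained from segmented VR demonstrations.
X = np.array([[0, 1, 2], [1, 1, 0], [2, 0, 1]])
y = np.array([1, 2, 0])
clf = DecisionTreeClassifier().fit(X, y)

def plan_waypoints(clf, state, goal_region, max_steps=10):
    """Chain classifier decisions into a high-level, robot-agnostic
    waypoint list over qualitative task-space regions."""
    waypoints = []
    for _ in range(max_steps):
        region = int(clf.predict([state])[0])   # next qualitative region
        waypoints.append(region)
        if region == goal_region:
            break
        state = list(state[1:]) + [region]      # toy state update (assumed)
    return waypoints

print(plan_waypoints(clf, [0, 1, 2], goal_region=0))
```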